Hand-Eye Calibration


Fast, Robust, Permutation-and-Sign Invariant SO(3) Pattern Alignment

Sarker, Anik, Asbeck, Alan T.

arXiv.org Artificial Intelligence

Abstract--We address the correspondence-free alignment of two rotation sets on SO(3), a core task in calibration and registration that is often impeded by missing time alignment, outliers, and unknown axis conventions. To handle axis relabelings and sign flips, we introduce a Permutation-and-Sign Invariant (PASI) wrapper that enumerates the 24 proper signed permutations, scores them via summed correlations, and fuses the per-axis estimates into a single rotation by projection/Karcher mean. Experiments on EuRoC Machine Hall simulations (axis-consistent) and the ETH Hand-Eye benchmark (robot_arm_real) (axis-ambiguous) show that our methods are accurate, 6-60x faster than traditional methods, and robust under extreme outlier ratios (up to 90%), all without correspondence search. Estimating the 3D rotation that aligns one sensor or object frame to another is a fundamental problem in robotics and computer vision. Closed-form or least-squares solutions (e.g., Davenport/QUEST, SVD/Procrustes, and modern quaternion solvers) are mature [25], [26], [27], [28], [29], but they typically assume paired measurements (known correspondences) and degrade under heavy outliers or axis-convention mismatches.
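The 24 "proper signed permutations" are exactly the rotations of the cube: every axis relabeling combined with sign flips whose determinant is +1. As a minimal NumPy sketch (our reconstruction from the abstract, not the authors' released code), enumerating these candidates and projecting a fused estimate back onto SO(3) looks like:

    import itertools
    import numpy as np

    def proper_signed_permutations():
        """Enumerate the 24 signed permutation matrices with det = +1,
        i.e., every axis relabeling/sign flip that is a proper rotation."""
        mats = []
        for perm in itertools.permutations(range(3)):
            for signs in itertools.product((1, -1), repeat=3):
                P = np.zeros((3, 3))
                for row, (col, s) in enumerate(zip(perm, signs)):
                    P[row, col] = float(s)
                if np.isclose(np.linalg.det(P), 1.0):  # keep proper rotations
                    mats.append(P)
        return mats  # len(mats) == 24

    def project_to_so3(M):
        """Closest rotation to M in the Frobenius sense (SVD projection)."""
        U, _, Vt = np.linalg.svd(M)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        return U @ D @ Vt

Each candidate would then be scored by the summed per-axis correlations between the two rotation streams, with the winning candidate's per-axis estimates fused via the projection above or a Karcher mean, as the abstract describes.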


Calib3R: A 3D Foundation Model for Multi-Camera to Robot Calibration and 3D Metric-Scaled Scene Reconstruction

Allegro, Davide, Terreran, Matteo, Ghidoni, Stefano

arXiv.org Artificial Intelligence

RELATED WORKS Hand-Eye Calibration: Hand-eye calibration is a well-established problem in robotics that aims to estimate the relative pose between a camera and a robot's end-effector. It is typically addressed by capturing a series of images of a known calibration pattern (e.g., a checkerboard) using a camera rigidly mounted on the robot hand, and using both the images and the corresponding robot poses to compute the camera's extrinsic parameters. Different mathematical formulations exist for solving hand-eye calibration; a widely adopted approach involves solving the equation AX = XB, where X is the unknown rigid transformation describing the pose of the camera with respect to the robot, while A and B denote the relative motions of the end-effector (from robot kinematics) and the camera (from pattern observations), respectively [31], [36]-[38]. Several other approaches were proposed: Shah [39] formulated a closed-form solution for the hand-eye problem by using an algorithm based on Singular Value Decomposition (SVD) and the Kronecker product to solve for rotation and translation separately, while Li et al. [40] used dual quaternions to solve them simultaneously, overcoming the limitations of the Kronecker product. Wang et al. [23] extended hand-eye calibration to multi-camera setups by incorporating a common reference frame, but required an external motion capture system, limiting its applicability to small setups. Andreff and Heller [41], [42] proposed two similar hand-eye calibration methods that leverage the Structure-from-Motion (SfM) paradigm to estimate camera motion and introduce a formulation for hand-eye calibration that includes a factor to metrically scale camera poses.
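To make the rotation structure of AX = XB concrete: with the column-major identity vec(AXB) = (B^T kron A) vec(X), the constraint R_A R_X = R_X R_B gives (R_B kron R_A) vec(R_X) = vec(R_X), so vec(R_X) is the eigenvector of sum_i R_Bi kron R_Ai whose eigenvalue is closest to the number of motion pairs. A NumPy sketch in the spirit of Shah's Kronecker/SVD formulation [39] (an illustration of the idea, not his exact algorithm):

    import numpy as np

    def project_to_so3(M):
        U, _, Vt = np.linalg.svd(M)
        return U @ np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))]) @ Vt

    def rotation_from_ax_xb(rot_A, rot_B):
        """Rotation part of AX = XB from n relative-motion pairs.
        R_A R_X = R_X R_B  =>  (R_B kron R_A) vec(R_X) = vec(R_X)."""
        K = sum(np.kron(Rb, Ra) for Ra, Rb in zip(rot_A, rot_B))
        w, V = np.linalg.eig(K)
        v = np.real(V[:, np.argmax(np.real(w))])  # dominant eigenvector
        Rx = v.reshape(3, 3, order="F")           # undo column-major vec()
        if np.linalg.det(Rx) < 0:                 # eigenvector sign is arbitrary
            Rx = -Rx
        return project_to_so3(Rx)

Once the rotation is fixed, the translation follows from a linear least-squares problem, which is why these Kronecker-style methods solve the two parts separately.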


A Certifiably Correct Algorithm for Generalized Robot-World and Hand-Eye Calibration

Wise, Emmett, Kaveti, Pushyami, Chen, Qilong, Wang, Wenhao, Singh, Hanumant, Kelly, Jonathan, Rosen, David M., Giamou, Matthew

arXiv.org Artificial Intelligence

Automatic extrinsic sensor calibration is a fundamental problem for multi-sensor platforms. Reliable and general-purpose solutions should be computationally efficient, require few assumptions about the structure of the sensing environment, and demand little effort from human operators; since the engineering effort required to obtain accurate calibration parameters grows with the number of sensors deployed, these properties become all the more important. In this work, we introduce a fast and certifiably globally optimal algorithm for solving a generalized formulation of the robot-world and hand-eye calibration (RWHEC) problem. The formulation presented is "generalized" in that it supports the simultaneous estimation of multiple sensor and target poses, and permits the use of monocular cameras that, alone, are unable to measure the scale of their environments. In addition to demonstrating our method's superior performance over existing solutions, we derive novel identifiability criteria and establish a priori guarantees of global optimality for problem instances with bounded measurement errors. We also introduce a complementary Lie-algebraic local solver for RWHEC and compare its performance with our global method and prior art. Finally, we provide a free and open-source implementation of our algorithms and experiments.
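For intuition, one common convention writes the robot-world and hand-eye problem as A_i X = Z B_i, chaining two unknowns: X (hand-eye) and Z (robot-world). A minimal Lie-algebraic local solver along the lines the abstract mentions as a complement (a SciPy sketch under that convention, not the paper's certifiably optimal global method):

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation as Rot

    def rwhec_residuals(x, A_list, B_list):
        """Local-solver residuals for A_i X = Z B_i.
        x = [rotvec_X, t_X, rotvec_Z, t_Z] (12 parameters);
        A_list/B_list hold (R, t) pairs of measured poses."""
        Rx, tx = Rot.from_rotvec(x[0:3]).as_matrix(), x[3:6]
        Rz, tz = Rot.from_rotvec(x[6:9]).as_matrix(), x[9:12]
        res = []
        for (Ra, ta), (Rb, tb) in zip(A_list, B_list):
            dR = (Ra @ Rx) @ (Rz @ Rb).T                 # rotation mismatch
            res.extend(Rot.from_matrix(dR).as_rotvec())  # log-map residual
            res.extend(Ra @ tx + ta - (Rz @ tb + tz))    # translation mismatch
        return np.asarray(res)

    # usage (hypothetical data):
    # sol = least_squares(rwhec_residuals, np.zeros(12), args=(A_list, B_list))

Like any local method, this sketch depends on its initialization; the paper's contribution is precisely to certify global optimality rather than rely on a good starting guess.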


PlaneHEC: Efficient Hand-Eye Calibration for Multi-view Robotic Arm via Any Point Cloud Plane Detection

Wang, Ye, Jing, Haodong, Liao, Yang, Ma, Yongqiang, Zheng, Nanning

arXiv.org Artificial Intelligence

Hand-eye calibration is an important task in vision-guided robotic systems and is crucial for determining the transformation matrix between the camera coordinate system and the robot end-effector. Existing methods for multi-view robotic systems usually rely on accurate geometric models or manual assistance, generalize poorly, and can be complicated and inefficient. Therefore, in this study, we propose PlaneHEC, a generalized hand-eye calibration method that does not require complex models and can be accomplished using only depth cameras, achieving fast and accurate calibration from arbitrary planar surfaces such as walls and tables. PlaneHEC introduces hand-eye calibration equations based on planar constraints, which makes it strongly interpretable and generalizable. PlaneHEC also uses a comprehensive solution that starts from a closed-form solution and refines it with iterative optimization, which greatly improves accuracy. We comprehensively evaluated the performance of PlaneHEC in both simulated and real-world environments and compared the results with other point-cloud-based calibration methods, proving its superiority. Our approach achieves universal and fast calibration with an innovative design of computational models, providing a strong contribution to the development of multi-agent systems and embodied intelligence.
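The planar constraint itself is compact: a point measured on a plane in the camera frame, pushed through the unknown hand-eye transform and the known end-effector pose, must still satisfy that plane's equation in the robot base frame. A hypothetical residual illustrating the constraint (our sketch with assumed frame conventions, not PlaneHEC's code):

    import numpy as np

    def plane_residuals(R_x, t_x, R_be, t_be, pts_cam, n, d):
        """Signed distances of camera-frame plane points to a base-frame plane.
        (R_x, t_x):  unknown hand-eye transform (camera -> end-effector);
        (R_be, t_be): known end-effector pose in the base frame (kinematics);
        pts_cam:     (N, 3) points on the plane seen by the depth camera;
        n, d:        base-frame plane with unit normal n and offset d (n . p = d)."""
        pts_ee = pts_cam @ R_x.T + t_x       # camera frame -> end-effector frame
        pts_base = pts_ee @ R_be.T + t_be    # end-effector frame -> robot base
        return pts_base @ n - d              # zero when the calibration is right

Stacking these residuals over several arm poses and plane observations yields a system that admits a closed-form initialization followed by iterative refinement, matching the two-stage solution strategy described above.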


3D Hand-Eye Calibration for Collaborative Robot Arm: Look at Robot Base Once

Li, Leihui, Wan, Lixuepiao, Krueger, Volker, Zhang, Xuping

arXiv.org Artificial Intelligence

Hand-eye calibration is a common problem in the field of collaborative robotics, involving the determination of the transformation matrix between the visual sensor and the robot flange to enable vision-based robotic tasks. However, this process typically requires multiple movements of the robot arm and an external calibration object, making it both time-consuming and inconvenient, especially in scenarios where frequent recalibration is necessary. In this work, we extend our previous method, which eliminates the need for external calibration objects such as a chessboard. We propose a generic dataset generation approach for point cloud registration, focusing on aligning the robot base point cloud with the scanned data. Furthermore, a more detailed simulation study is conducted involving several different collaborative robot arms, followed by real-world experiments in an industrial setting. Our improved method is simulated and evaluated using a total of 14 robotic arms from 9 different brands, including KUKA, Universal Robots, UFACTORY, and Franka Emika, all of which are widely used in the field of collaborative robotics. Physical experiments demonstrate that our extended approach achieves performance comparable to existing commercial hand-eye calibration solutions, while completing the entire calibration procedure in just a few seconds. In addition, we provide a user-friendly hand-eye calibration solution, with the code publicly available at github.com/leihui6/LRBO.
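Since this approach reduces hand-eye calibration to registering a robot base model against the scanned scene, a generic registration step could look like the following Open3D sketch (file names, threshold, and initialization are placeholders, not the LRBO pipeline):

    import numpy as np
    import open3d as o3d

    # Placeholder inputs: a scan containing the robot base, and a model cloud
    # of the base (e.g., sampled from CAD or generated as in the paper).
    scan = o3d.io.read_point_cloud("scan.ply")
    base = o3d.io.read_point_cloud("base_model.ply")

    T_init = np.eye(4)  # coarse initial guess, e.g., from global registration
    icp = o3d.pipelines.registration.registration_icp(
        base, scan, max_correspondence_distance=0.01, init=T_init,
        estimation_method=o3d.pipelines.registration
            .TransformationEstimationPointToPoint())
    print(icp.transformation)  # scanner-to-base pose; the hand-eye transform
                               # then follows from the robot kinematics

The appeal of this formulation is that a single scan of the base replaces the usual multi-pose chessboard routine, which is why the whole procedure can finish in seconds.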


EasyHeC++: Fully Automatic Hand-Eye Calibration with Pretrained Image Models

Hong, Zhengdong, Zheng, Kangfu, Chen, Linghao

arXiv.org Artificial Intelligence

Hand-eye calibration plays a fundamental role in robotics by directly influencing the efficiency of critical operations such as manipulation and grasping. In this work, we present a novel framework, EasyHeC++, designed for fully automatic hand-eye calibration. In contrast to previous methods that necessitate manual calibration, specialized markers, or the training of arm-specific neural networks, our approach is the first system that enables accurate calibration of any robot arm in a marker-free, training-free, and fully automatic manner. Our approach employs a two-step process. First, we initialize the camera pose using a sampling or feature-matching-based method with the aid of pretrained image models. Subsequently, we perform pose optimization through differentiable rendering. Extensive experiments demonstrate the system's superior accuracy in both synthetic and real-world datasets across various robot arms and camera settings. Project page: https://ootts.github.io/easyhec_plus.
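The core of the second stage, gradients flowing from an image-space loss back to a pose parameterization, can be shown with a toy stand-in: here a point-alignment loss replaces the differentiable renderer (the real system rasterizes robot masks; this PyTorch sketch only illustrates the optimization pattern):

    import torch

    def so3_exp(w):
        # Rodrigues' formula, written so autograd can differentiate through it
        th = w.norm() + 1e-8
        k = w / th
        O = w.new_zeros(())
        K = torch.stack([torch.stack([O, -k[2], k[1]]),
                         torch.stack([k[2], O, -k[0]]),
                         torch.stack([-k[1], k[0], O])])
        return torch.eye(3) + torch.sin(th) * K + (1.0 - torch.cos(th)) * (K @ K)

    torch.manual_seed(0)
    pts = torch.randn(50, 3)                          # stand-in robot keypoints
    R_true = so3_exp(torch.tensor([0.2, -0.1, 0.3]))
    obs = pts @ R_true.T + torch.tensor([0.05, 0.0, -0.02])  # "observed" view

    # Small nonzero init avoids the axis-angle gradient singularity at zero.
    w = torch.tensor([0.01, 0.01, 0.01], requires_grad=True)
    t = torch.zeros(3, requires_grad=True)
    opt = torch.optim.Adam([w, t], lr=0.05)
    for _ in range(300):
        opt.zero_grad()
        loss = ((pts @ so3_exp(w).T + t - obs) ** 2).mean()  # render-loss stand-in
        loss.backward()
        opt.step()

In the actual pipeline the loss compares a differentiably rendered robot silhouette against a segmented mask, but the gradient path to the pose parameters is the same.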


Generative Adversarial Networks for Solving Hand-Eye Calibration without Data Correspondence

Hong, Ilkwon, Ha, Junhyoung

arXiv.org Artificial Intelligence

In this study, we rediscovered the framework of generative adversarial networks (GANs) as a solver for calibration problems without data correspondence. When data correspondence is not present or loosely established, the calibration problem becomes a parameter estimation problem that aligns the two data distributions. This procedure is conceptually identical to the underlying principle of GAN training in which networks are trained to match the generative distribution to the real data distribution. As a primary application, this idea is applied to the hand-eye calibration problem, demonstrating the proposed method's applicability and benefits in complicated calibration problems.
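A toy version of that principle: treat the unknown rotation as the "generator" and train a discriminator to distinguish the transformed stream from the target stream, with no pairing between the two. A self-contained PyTorch sketch of the idea (our illustration, not the paper's architecture):

    import torch
    import torch.nn as nn

    def so3_exp(w):
        """Rodrigues' formula (autograd-friendly)."""
        th = w.norm() + 1e-8
        k = w / th
        O = w.new_zeros(())
        K = torch.stack([torch.stack([O, -k[2], k[1]]),
                         torch.stack([k[2], O, -k[0]]),
                         torch.stack([-k[1], k[0], O])])
        return torch.eye(3) + torch.sin(th) * K + (1.0 - torch.cos(th)) * (K @ K)

    torch.manual_seed(0)
    # Anisotropic toy measurements so the distribution pins down the rotation
    # (up to the data's own symmetries); streams are shuffled: no correspondence.
    a_data = torch.randn(4096, 3) * torch.tensor([3.0, 1.0, 0.3])
    R_true = so3_exp(torch.tensor([0.4, -0.2, 0.1]))
    b_data = (a_data @ R_true.T)[torch.randperm(4096)]

    w = torch.tensor([0.01, 0.01, 0.01], requires_grad=True)  # the "generator"
    D = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))
    opt_w = torch.optim.Adam([w], lr=1e-2)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        a = a_data[torch.randint(0, 4096, (256,))]
        b = b_data[torch.randint(0, 4096, (256,))]
        fake = a @ so3_exp(w).T
        opt_d.zero_grad()  # discriminator: tell stream b apart from rotated a
        d_loss = (bce(D(b), torch.ones(256, 1))
                  + bce(D(fake.detach()), torch.zeros(256, 1)))
        d_loss.backward(); opt_d.step()
        opt_w.zero_grad()  # calibration step: make rotated a look like b
        g_loss = bce(D(fake), torch.ones(256, 1))
        g_loss.backward(); opt_w.step()

The calibration parameters converge when the discriminator can no longer separate the two distributions, which is exactly the distribution-alignment view of correspondence-free calibration described in the abstract.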


On Flange-based 3D Hand-Eye Calibration for Soft Robotic Tactile Welding

Han, Xudong, Guo, Ning, Jie, Yu, Wang, He, Wan, Fang, Song, Chaoyang

arXiv.org Artificial Intelligence

This paper investigates robot hand-eye calibration that directly exploits the robot's own standardized design, employing 3D scanners with collaborative robots. The well-established geometric features of the robot flange are exploited by directly capturing its point cloud data. In particular, an iterative method is proposed to facilitate point cloud processing toward a refined calibration outcome. Extensive experiments are conducted over a range of collaborative robots, including Universal Robots UR5 & UR10 e-series, Franka Emika, and AUBO i5, using an industrial-grade 3D scanner (Photoneo PhoXi S & M) and a commercial-grade 3D scanner (Microsoft Azure Kinect DK). Experimental results show that translational and rotational errors converge efficiently to less than 0.28 mm and 0.25 degrees, respectively, achieving a hand-eye calibration accuracy as high as the camera's resolution and probing the hardware limit. A welding seam tracking system is presented, combining the flange-based calibration method with soft tactile sensing. The experimental results show that the system enables the robot to adjust its motion in real time, ensuring consistent weld quality and paving the way for more efficient and adaptable manufacturing processes.
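The "well-established geometric features" of the flange are essentially its circular face. A generic least-squares sketch of recovering the flange center, normal, and radius from a segmented point cloud (our illustration of exploiting that geometry, not the paper's iterative pipeline):

    import numpy as np

    def fit_flange_circle(pts):
        """Fit the circular flange face from its (N, 3) point cloud:
        plane via SVD, then an algebraic circle fit within that plane."""
        c = pts.mean(axis=0)
        _, _, Vt = np.linalg.svd(pts - c)
        n = Vt[2]                        # plane normal: least-variance direction
        uv = (pts - c) @ Vt[:2].T        # 2D coordinates within the plane
        # circle (u-a)^2 + (v-b)^2 = r^2 rewritten as a linear system
        A = np.column_stack([2 * uv, np.ones(len(uv))])
        sol, *_ = np.linalg.lstsq(A, (uv ** 2).sum(axis=1), rcond=None)
        center = c + Vt[:2].T @ sol[:2]  # back to 3D coordinates
        radius = np.sqrt(sol[2] + sol[:2] @ sol[:2])
        return center, n, radius

Repeating such fits as the segmentation is refined, and registering the recovered geometry against the known flange dimensions, is the kind of iterative point cloud processing the paper builds its calibration on.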


Multi-Camera Hand-Eye Calibration for Human-Robot Collaboration in Industrial Robotic Workcells

Allegro, Davide, Terreran, Matteo, Ghidoni, Stefano

arXiv.org Artificial Intelligence

In industrial scenarios, effective human-robot collaboration relies on multi-camera systems to robustly monitor human operators despite the occlusions that typically show up in a robotic workcell. In this scenario, precise localization of the person in the robot coordinate system is essential, making the hand-eye calibration of the camera network critical. This process presents significant challenges when high calibration accuracy must be achieved in a short time to minimize production downtime, and when dealing with extensive camera networks used for monitoring wide areas, such as industrial robotic workcells. Our paper introduces an innovative and robust multi-camera hand-eye calibration method, designed to optimize each camera's pose relative to both the robot's base and to each other camera. This optimization integrates two types of key constraints: i) a single board-to-end-effector transformation, and ii) the relative camera-to-camera transformations. We demonstrate the superior performance of our method through comprehensive experiments employing the METRIC dataset and real-world data collected in industrial scenarios, showing notable advancements over state-of-the-art techniques even when using fewer than 10 images. Additionally, we release an open-source version of our multi-camera hand-eye calibration algorithm at https://github.com/davidea97/Multi-Camera-Hand-Eye-Calibration.git.
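A rotation-only sketch of how the two constraint types enter one least-squares problem (a simplification we wrote for illustration; the paper optimizes full SE(3) poses):

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation as Rot

    def residuals(x, board_R, ee_R, pair_R, n_cam):
        """x packs rotation vectors: cam_k -> base for each camera, then
        ee -> board. board_R[k][i]: cam_k -> board rotation seen in image i;
        ee_R[i]: base -> end-effector rotation from kinematics;
        pair_R[(j, k)]: measured rotation of camera k in camera j's frame."""
        Rc = [Rot.from_rotvec(x[3*k:3*k+3]).as_matrix() for k in range(n_cam)]
        Rb = Rot.from_rotvec(x[3*n_cam:3*n_cam+3]).as_matrix()  # ee -> board
        res = []
        for k in range(n_cam):
            for i, Rcb in board_R[k].items():
                # (i) one fixed board-to-end-effector transform for all views
                dR = (Rc[k] @ Rcb) @ (ee_R[i] @ Rb).T
                res.extend(Rot.from_matrix(dR).as_rotvec())
        for (j, k), Rjk in pair_R.items():
            # (ii) camera-to-camera consistency between estimated poses
            dR = (Rc[j].T @ Rc[k]) @ Rjk.T
            res.extend(Rot.from_matrix(dR).as_rotvec())
        return np.asarray(res)

    # usage (hypothetical data):
    # sol = least_squares(residuals, x0, args=(board_R, ee_R, pair_R, n_cam))

Coupling the per-camera board constraints with the inter-camera terms is what lets the method stay accurate with fewer than 10 images per camera: each camera's pose is supported by the whole network, not just its own detections.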


Semi-Autonomous Laparoscopic Robot Docking with Learned Hand-Eye Information Fusion

Tian, Huanyu, Huber, Martin, Mower, Christopher E., Han, Zhe, Li, Changsheng, Duan, Xingguang, Bergeles, Christos

arXiv.org Artificial Intelligence

In this study, we introduce a novel shared-control system for key-hole docking operations, combining a commercial camera with occlusion-robust pose estimation and a hand-eye information fusion technique. This system is used to enhance docking precision and force-compliance safety. To train a hand-eye information fusion network model, we generated a self-supervised dataset using this docking system. After training, our pose estimation method showed improved accuracy compared to traditional methods, including observation-only approaches, hand-eye calibration, and conventional state estimation filters. In real-world phantom experiments, our approach demonstrated its effectiveness with reduced position dispersion (1.23 ± 0.81 mm vs. 2.47 ± 1.22 mm) and force dispersion (0.78 ± 0.57 N vs. 1.15 ± 0.97 N) compared to the control group. These advancements in semi-autonomous co-manipulation scenarios enhance interaction and stability. The study presents an anti-interference, steady, and precise solution with potential applications extending beyond laparoscopic surgery to other minimally invasive procedures.